
    Experiences on Evaluating Network Simulators: A Methodological Approach

    There exists a variety of network simulators used to imitate the protocols, nodes, and connections in data networks. They differ in their design, goals, and characteristics; thus, comparing simulators requires a clear and standardized methodology. In this paper, we propose an approach to evaluate them based on a set of measurable and comparable criteria. We validate the suggested approach with two network simulators, namely Packet Tracer and GNS3: a test scenario is run on both simulators, in Linux and Windows environments, and their performance is monitored according to the suggested approach. This paper does not propose a method for selecting the best simulator; rather, it supplies researchers with an evaluation tool that can be used to describe, compare, and select the most suitable network simulator for a given scenario.
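
    As an illustration only, the sketch below shows how a set of measurable criteria could be turned into a weighted comparison between two simulators; the criteria names, weights, and measured values are hypothetical assumptions and are not taken from the paper.

```python
# Hypothetical sketch of a criteria-based simulator comparison.
# Criteria, weights, and measured values are illustrative only.

CRITERIA = {
    # criterion: (weight, higher_is_better)
    "cpu_usage_pct":  (0.4, False),
    "memory_mb":      (0.3, False),
    "setup_time_min": (0.3, False),
}

measurements = {
    "Packet Tracer": {"cpu_usage_pct": 35, "memory_mb": 900,  "setup_time_min": 10},
    "GNS3":          {"cpu_usage_pct": 55, "memory_mb": 1500, "setup_time_min": 25},
}

def score(simulators, criteria):
    """Normalize each criterion to [0, 1] across simulators and return
    a weighted score per simulator (1.0 = best on every criterion)."""
    scores = {name: 0.0 for name in simulators}
    for crit, (weight, higher_is_better) in criteria.items():
        values = [simulators[s][crit] for s in simulators]
        lo, hi = min(values), max(values)
        for s in simulators:
            norm = 0.5 if hi == lo else (simulators[s][crit] - lo) / (hi - lo)
            if not higher_is_better:
                norm = 1.0 - norm
            scores[s] += weight * norm
    return scores

print(score(measurements, CRITERIA))
```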

    Reliable Composite Web Services Execution: Towards a Dynamic Recovery Decision

    During the execution of a Composite Web Service (CWS), different faults may occur that cause WS failures. There exist strategies that can be applied to repair these failures, such as WS retry, WS substitution, compensation, rollback, or replication. Each strategy has advantages and disadvantages in different execution scenarios and can have a different impact on the CWS QoS. Hence, it is important to define a dynamic fault-tolerance strategy that takes environment and execution information into account in order to decide on the appropriate recovery strategy. We present a preliminary study that analyzes the impact of different recovery strategies on the total CWS execution time in different scenarios. The experimental results show that, under different conditions, recovery strategies behave differently. This analysis is a first step towards a model that dynamically decides which recovery strategy is the best choice by taking into account the context information available when the failure occurs.
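
    As a minimal sketch of what such a dynamic decision could look like, the code below selects one of the recovery strategies named in the abstract from a few context attributes; the context fields, thresholds, and rules are illustrative assumptions, not the decision model proposed in the paper.

```python
# Hypothetical sketch of a context-aware recovery-strategy selector.
# The context fields and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class FailureContext:
    fault_is_transient: bool      # e.g. a timeout vs. a permanent fault
    substitutes_available: int    # equivalent WSs that could replace the failed one
    work_already_committed: bool  # earlier WSs produced effects that may need undoing
    remaining_time_budget: float  # seconds left before the CWS deadline

def choose_recovery_strategy(ctx: FailureContext) -> str:
    """Pick a recovery strategy based on the execution context of the failure."""
    if ctx.fault_is_transient and ctx.remaining_time_budget > 5.0:
        return "retry"            # cheap when the fault is likely temporary
    if ctx.substitutes_available > 0:
        return "substitution"     # switch to an equivalent WS
    if ctx.work_already_committed:
        return "compensation"     # undo committed effects explicitly
    return "rollback"             # otherwise restore the previous state

print(choose_recovery_strategy(
    FailureContext(fault_is_transient=True, substitutes_available=2,
                   work_already_committed=False, remaining_time_budget=12.0)))
```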

    A Self-adaptive Agent-based System for Cloud Platforms

    Cloud computing is a model for enabling on-demand network access to a shared pool of computing resources that can be dynamically allocated and released with minimal effort. However, this task can be complex in highly dynamic environments with many resources to allocate for an increasing number of different user requirements. In this work, we propose a Cloud architecture based on a multi-agent system exhibiting self-adaptive behavior to address dynamic resource allocation. This self-adaptive system follows a MAPE-K approach to reason and act, according to QoS, Cloud service information, and propagated run-time information, in order to detect QoS degradation and make better resource allocation decisions. We validate our proposed Cloud architecture by simulation. Results show that it can properly allocate resources to reduce energy consumption while satisfying the QoS demanded by users.
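
    The sketch below illustrates the general shape of a MAPE-K loop for resource allocation; the monitored metrics, thresholds, and scaling action are hypothetical and do not reproduce the agents' actual reasoning rules.

```python
# Minimal sketch of a MAPE-K control loop for resource allocation.
# Metrics, thresholds, and the scaling action are illustrative assumptions.

knowledge = {"qos_target_ms": 200, "allocated_vms": 2}

def monitor():
    # In a real system this would come from Cloud service / run-time information.
    return {"avg_response_ms": 260, "vm_utilization": 0.85}

def analyze(metrics):
    # Detect QoS degradation with respect to the knowledge base.
    return metrics["avg_response_ms"] > knowledge["qos_target_ms"]

def plan(metrics):
    # Decide how many VMs to add to restore the demanded QoS.
    return {"add_vms": 1} if metrics["vm_utilization"] > 0.8 else {"add_vms": 0}

def execute(action):
    knowledge["allocated_vms"] += action["add_vms"]
    print(f"Allocated VMs: {knowledge['allocated_vms']}")

metrics = monitor()
if analyze(metrics):
    execute(plan(metrics))
```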

    Sentiment Analysis on Twitter: A Comparative Study

    Sentiment analysis helps to determine the perception of users in different aspects of daily life, such as product preferences in the market, the level of user confidence in work environments, or political preferences. The idea is to predict trends or preferences based on sentiment. In this article we evaluate the most common techniques used for this type of analysis, considering both classical machine learning and deep learning techniques. Our main contribution is a methodological strategy that covers the phases of data preprocessing, construction of predictive models, and their evaluation. From the results, the best classical model was SVM, with 78% accuracy and a 79% F1 score. Among the Deep Learning models, the best performer was the Long Short-Term Memory (LSTM) network, reaching 88% accuracy and an 89% F1 score, while the worst was the CNN, with 77% for both accuracy and F1. We conclude that the LSTM algorithm delivered the best performance, reaching up to 89%.
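
    To make the classical baseline concrete, the sketch below builds a small TF-IDF + linear SVM pipeline with scikit-learn; the in-line tweets and labels are placeholders for the corpus used in the article, and the pipeline settings are assumptions rather than the authors' exact configuration.

```python
# Illustrative sketch of a classical SVM baseline (TF-IDF + linear SVM).
# The tiny in-line dataset is a placeholder, not the paper's tweet corpus.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline
from sklearn.metrics import classification_report

tweets = ["I love this phone", "Terrible service, never again",
          "Great game last night", "This update is awful"]
labels = ["positive", "negative", "positive", "negative"]

model = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, ngram_range=(1, 2))),
    ("svm", LinearSVC()),
])

model.fit(tweets, labels)
print(classification_report(labels, model.predict(tweets)))
```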

    Semantic Web Datatype Inference: Towards Better RDF Matching

    In the context of RDF document matching/integration, the datatype information related to literal objects is an important aspect to analyze in order to better determine similar RDF documents. In this paper, we propose a datatype inference process based on four steps: (i) predicate information analysis (i.e., deducing the datatype from an existing range property); (ii) analysis of the object value itself by a pattern-matching process (i.e., recognizing the object's lexical space); (iii) semantic analysis of the predicate name and its context; and (iv) generalization of numeric and binary datatypes to ensure the integration. We evaluated the performance and accuracy of our approach with datasets from DBpedia. Results show that the execution time of the inference process is linear and that its accuracy can reach up to 97.10%.
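
    As a simplified illustration of step (ii), the sketch below infers an XSD datatype from the lexical space of a literal value; the regular expressions and the handful of datatypes covered are assumptions for illustration, not the paper's full rule set.

```python
# Simplified sketch of lexical-space pattern matching for datatype inference.
# Only a few XSD datatypes are covered, as an illustration.

import re

LEXICAL_PATTERNS = [
    (re.compile(r"^(true|false)$", re.IGNORECASE), "xsd:boolean"),
    (re.compile(r"^[+-]?\d+$"),                    "xsd:integer"),
    (re.compile(r"^[+-]?\d*\.\d+$"),               "xsd:decimal"),
    (re.compile(r"^\d{4}-\d{2}-\d{2}$"),           "xsd:date"),
]

def infer_datatype(literal: str) -> str:
    for pattern, datatype in LEXICAL_PATTERNS:
        if pattern.match(literal.strip()):
            return datatype
    return "xsd:string"   # fallback when no lexical space is recognized

for value in ["42", "3.14", "2017-05-30", "true", "Caracas"]:
    print(value, "->", infer_datatype(value))
```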

    An event detection framework for the representation of the AGGIR variables

    In this paper, we propose a framework to study the AGGIR (Autonomy Gerontology Iso-Resources Groups) grid model, in order to evaluate the level of independence of elderly people according to their ability to perform activities and interact with their environment over time. To model the Activities of Daily Living (ADL), we also extend a previously proposed Domain Specific Language (DSL) with operators that handle constraints related to the time and location of activities, as well as event recognition. Our framework aims at providing a tool to analyze the performance of elderly or handicapped people within a home environment by means of data recovered from sensors using the iCASA simulator. To evaluate our approach, we pick three of the AGGIR variables (i.e., dressing, toileting, and transfers) and evaluate their testability in several scenarios, by means of records representing the occurrence of activities of the elderly. Results demonstrate that our framework correctly manages the obtained records and thus generates the appropriate event information.
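
    For illustration, the sketch below recognizes one ADL event ("dressing") from timestamped sensor records; the sensor identifiers, the time window, and the rule itself are hypothetical and are not the operators of the paper's DSL.

```python
# Hypothetical sketch of recognizing an ADL event from sensor records.
# Sensor names, time window, and the rule are illustrative assumptions.

from datetime import datetime, timedelta

records = [
    {"sensor": "wardrobe_door",  "value": "open",   "time": datetime(2020, 1, 1, 7, 30)},
    {"sensor": "bedroom_motion", "value": "on",     "time": datetime(2020, 1, 1, 7, 31)},
    {"sensor": "wardrobe_door",  "value": "closed", "time": datetime(2020, 1, 1, 7, 36)},
]

def detect_dressing(records, max_duration=timedelta(minutes=10)):
    """Report a 'dressing' event if the wardrobe is opened and closed within
    max_duration while motion is sensed in the bedroom."""
    opened = next((r["time"] for r in records
                   if r["sensor"] == "wardrobe_door" and r["value"] == "open"), None)
    closed = next((r["time"] for r in records
                   if r["sensor"] == "wardrobe_door" and r["value"] == "closed"), None)
    motion = any(r["sensor"] == "bedroom_motion" and r["value"] == "on" for r in records)
    return (opened is not None and closed is not None
            and motion and closed - opened <= max_duration)

print("dressing detected:", detect_dressing(records))
```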

    Towards an Integrated Full-Stack Green Software Development Methodology

    Existing green/eco-responsible approaches for IT are frequently domain-specific and focused on a single topic. For example, some works focus on saving energy through better virtual machine management in cloud infrastructures, or on data management in wireless sensor networks in order to minimize data transfers and sensor wake-ups. Nevertheless, they consider only limited aspects of the whole software development process; indeed, very few works propose a global approach. In this context, we envision a green development methodology that addresses energy-saving aspects from the design phase and at all system layers (software, hardware, user requirements, execution contexts, etc.), which can provide positive leverage as well as avoid side effects (a decision that is positive at one system layer may have a negative impact on other layers). We motivate the interest of this vision and describe key ideas on how to address these considerations in the development methodology.

    MAS2DES-Onto: Ontology for MAS-based Digital Ecosystems

    Multi-Agent Systems (MASs) have received much attention in recent years because of their advantages in modeling complex distributed systems, such as Digital Ecosystems (DESs). Many existing modeling languages that support the design of such systems are based on ontologies to assist the representation of agents' knowledge. However, in the context of DESs, there is still a need for more general conceptual models to represent the specific characteristics of DESs in terms of win-win interaction, engagement, equilibrium, and self-organization; concepts such as behavior, roles, rules, and environment are therefore needed. This paper describes an ontology-based approach by proposing MAS2DES-Onto, a conceptual model that considers the essential static and dynamic aspects of MASs through a clear representation of their concepts and relationships, in order to support the design and development of DESs. To validate it and conduct experimental tests, we integrate MAS2DES-Onto into a framework that automatically generates MAS-based DESs. Results show the efficiency and effectiveness of our approach.
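
    Purely as an illustration of how a few of these concepts could be encoded, the sketch below declares some classes and relationships as an RDFS vocabulary with rdflib; the namespace, class names, and properties are assumptions and do not reproduce the actual MAS2DES-Onto ontology.

```python
# Minimal sketch of encoding a few MAS/DES concepts as an RDFS vocabulary.
# The namespace and class/property names are illustrative assumptions.

from rdflib import Graph, Namespace, RDF, RDFS

M2D = Namespace("http://example.org/mas2des-onto#")
g = Graph()
g.bind("m2d", M2D)

for cls in ("Agent", "Role", "Behavior", "Rule", "Environment"):
    g.add((M2D[cls], RDF.type, RDFS.Class))

# Relationships between concepts (domain/range only, for brevity).
g.add((M2D.playsRole, RDF.type, RDF.Property))
g.add((M2D.playsRole, RDFS.domain, M2D.Agent))
g.add((M2D.playsRole, RDFS.range, M2D.Role))

g.add((M2D.exhibitsBehavior, RDF.type, RDF.Property))
g.add((M2D.exhibitsBehavior, RDFS.domain, M2D.Agent))
g.add((M2D.exhibitsBehavior, RDFS.range, M2D.Behavior))

print(g.serialize(format="turtle"))
```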

    Methodology to Evaluate WSN Simulators: Focusing on Energy Consumption Awareness

    Nowadays, there exists a large number of available network simulators that differ in their design, goals, and characteristics. Users who have to decide which simulator is the most appropriate for their particular requirements are lost, faced with a panoply of disparate and diverse simulators. Hence, there is an obvious need for guidelines that support users in selecting and customizing a simulator to suit their preferences and needs. In previous work, we proposed a generic and novel methodological approach to evaluate network simulators, considering a set of qualitative and quantitative criteria. However, it lacked criteria related to Wireless Sensor Networks (WSN). Thus, the aim of this work is threefold: (i) extend the previously proposed methodology to cover the evaluation of WSN simulators, including aspects such as energy consumption modelling and scalability; (ii) elaborate a state of the art of WSN simulators, with the intention of identifying those most used and cited in scientific articles; and (iii) demonstrate the suitability of our methodology by evaluating and comparing three of the most cited simulators. Our methodology provides researchers with an evaluation tool that can be used to describe and compare WSN simulators in order to select the most appropriate one for a given scenario.
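
    To give an idea of the kind of energy-consumption modelling an energy-aware criterion would check for, the sketch below implements a first-order radio energy model for a sensor node; the current draws and voltage are illustrative values, not measurements from the evaluated simulators.

```python
# Sketch of a first-order radio energy model for a WSN node.
# The current draws and voltage are illustrative values only.

STATE_CURRENT_MA = {"tx": 17.4, "rx": 19.7, "idle": 0.02, "sleep": 0.001}
VOLTAGE_V = 3.0

def energy_mj(time_in_state_s):
    """Energy (millijoules) consumed by a node, given seconds spent per radio state."""
    return sum(STATE_CURRENT_MA[state] * VOLTAGE_V * seconds
               for state, seconds in time_in_state_s.items())

# One duty cycle: mostly asleep, with brief wake-ups to receive and transmit.
print(round(energy_mj({"tx": 0.5, "rx": 1.2, "idle": 3.0, "sleep": 55.3}), 2), "mJ")
```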

    WSN simulators evaluation: an approach focusing on energy awareness

    The large number of Wireless Sensor Network (WSN) simulators available nowadays differ in their design, goals, and characteristics. Users who have to decide which simulator is the most appropriate for their particular requirements are lost, faced with a panoply of disparate and diverse simulators. Hence, there is an obvious need for guidelines that support users in selecting a simulator to suit their preferences and needs. In previous work, we proposed a generic and novel approach to evaluate network simulators, considering a methodological process and a set of qualitative and quantitative criteria. In particular, for WSN simulators the criteria include aspects relevant to this kind of network, such as energy consumption modelling and scalability capacity. The aims of this work are to: (i) describe in depth the criteria related to WSN aspects; (ii) extend and update the state of the art of WSN simulators elaborated in our previous works, identifying those most used and cited in scientific articles; and (iii) demonstrate the suitability of our methodological approach by evaluating and comparing the three most cited simulators, especially in terms of energy modelling and scalability. Results show that our approach provides researchers with an evaluation tool that can be used to describe and compare WSN simulators in order to select the most appropriate one for a given scenario.
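
    As one possible way to quantify a scalability criterion, the sketch below estimates how simulation runtime grows with network size by fitting a power law to (nodes, runtime) pairs; the node counts and runtimes are placeholders, not measured data from the compared simulators.

```python
# Sketch of quantifying a scalability criterion: fit runtime ~ a * nodes^b
# and report the growth exponent b. Input data are placeholders.

import math

def growth_exponent(node_counts, runtimes_s):
    """Least-squares slope of log(runtime) vs. log(nodes); ~1 means linear scaling."""
    xs = [math.log(n) for n in node_counts]
    ys = [math.log(t) for t in runtimes_s]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

nodes = [100, 200, 400, 800]
runtime_simulator_a = [12, 26, 55, 118]   # roughly linear growth
runtime_simulator_b = [15, 60, 240, 980]  # roughly quadratic growth
print("A exponent:", round(growth_exponent(nodes, runtime_simulator_a), 2))
print("B exponent:", round(growth_exponent(nodes, runtime_simulator_b), 2))
```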